Towards an Empirical Foundation for Assessing Bayesian Optimization of Hyperparameters
Authors
Abstract
Progress in practical Bayesian optimization is hampered by the fact that the only available standard benchmarks are artificial test functions that are not representative of practical applications. To alleviate this problem, we introduce a library of benchmarks from the prominent application of hyperparameter optimization and use it to compare Spearmint, TPE, and SMAC, three recent Bayesian optimization methods for hyperparameter optimization.
Similar references
An Efficient Approach for Assessing Hyperparameter Importance
The performance of many machine learning methods depends critically on hyperparameter settings. Sophisticated Bayesian optimization methods have recently achieved considerable successes in optimizing these hyperparameters, in several cases surpassing the performance of human experts. However, blind reliance on such methods can leave end users without insight into the relative importance of diff...
An Empirical Bayes Approach to Optimizing Machine Learning Algorithms
Most models, and the algorithms for fitting them, have hyperparameters η, e.g., the number of layers in a neural network, gradient descent learning rate parameters, the number of topics in a topic model, or the prior variance. Existing methods for choosing them include expert knowledge, grid search, random sampling, and Bayesian optimization (BayesOpt) [Snoek et al. 2012]. BayesOpt is an automated way of ...
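Of the tuning methods listed above, random sampling is the simplest to illustrate. The sketch below, with an entirely hypothetical toy objective standing in for a real train-and-validate loop, draws hyperparameters at random (log-uniform for the learning rate, uniform for an integer layer count) and keeps the best trial:

```python
import random

# Hypothetical toy objective: pretend the validation loss depends on two
# hyperparameters. In practice this would train and evaluate a real model.
def validation_loss(learning_rate, num_layers):
    return (learning_rate - 0.01) ** 2 + 0.001 * (num_layers - 4) ** 2

def random_search(n_trials=100, seed=0):
    """Draw random hyperparameter settings and return the best trial found."""
    rng = random.Random(seed)
    best = None
    for _ in range(n_trials):
        lr = 10 ** rng.uniform(-4, 0)   # log-uniform learning rate in [1e-4, 1]
        layers = rng.randint(1, 10)     # integer-valued number of layers
        loss = validation_loss(lr, layers)
        if best is None or loss < best[0]:
            best = (loss, lr, layers)
    return best

best_loss, best_lr, best_layers = random_search()
print(best_loss, best_lr, best_layers)
```

Bayesian optimization methods such as Spearmint, TPE, and SMAC replace the blind random draws with a model of the loss surface that proposes promising configurations, which is what makes them sample-efficient on expensive objectives.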
Towards efficient Bayesian Optimization for Big Data
We present a new Bayesian optimization method, environmental entropy search (EnvES), suited for optimizing the hyperparameters of machine learning algorithms on large datasets. EnvES executes fast algorithm runs on subsets of the data and probabilistically extrapolates their performance to reason about performance on the entire dataset. It considers the dataset size as an additional degree of f...
Freeze-Thaw Bayesian Optimization
In machine learning, the term “training” is used to describe the procedure of fitting a model to data. In many popular models, this fitting procedure is framed as an optimization problem, in which a loss is minimized as a function of the parameters. In all but the simplest machine learning models, this minimization must be performed with an iterative algorithm such as stochastic gradient descen...
Active Contextual Entropy Search
Contextual policy search allows adapting robotic movement primitives to different situations. For instance, a locomotion primitive might be adapted to different terrain inclinations or desired walking speeds. Such an adaptation is often achievable by modifying a small number of hyperparameters. However, learning, when performed on real robotic systems, is typically restricted to a small number ...